Zeroth-Order Nonconvex Stochastic Optimization: Handling Constraints, High Dimensionality, and Saddle Points
Authors
Krishnakumar Balasubramanian, Saeed Ghadimi
Abstract
In this paper, we propose and analyze zeroth-order stochastic approximation algorithms for nonconvex and convex optimization, with a focus on addressing constrained optimization, high-dimensional settings, and saddle-point avoidance. To handle constrained optimization, we first propose generalizations of the conditional gradient algorithm achieving rates similar to the standard stochastic gradient algorithm using only zeroth-order information. To facilitate zeroth-order optimization in high dimensions, we explore the advantages of structural sparsity assumptions. Specifically, (i) we highlight an implicit regularization phenomenon where the standard stochastic gradient algorithm with zeroth-order information adapts to the sparsity of the problem at hand by just varying the step size, and (ii) we propose a truncated stochastic gradient algorithm with zeroth-order information, whose rate of convergence depends only poly-logarithmically on the dimensionality. We next focus on avoiding saddle points in the nonconvex setting. Toward that end, we interpret the Gaussian smoothing technique for estimating gradients based on zeroth-order information as an instantiation of the first-order Stein's identity. Based on this, we provide a novel linear-(in dimension) time estimator of the Hessian matrix of a function using only zeroth-order information, which is based on the second-order Stein's identity. We then provide a zeroth-order variant of the cubic regularized Newton method for avoiding saddle points and discuss its rate of convergence to local minima.
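The two estimators the abstract alludes to are simple enough to sketch. The following NumPy sketch is illustrative, not the paper's exact construction; the smoothing radius nu, the batch sizes m, and the symmetric-difference form of the Hessian estimator are assumptions made here. It estimates the gradient via Gaussian smoothing, i.e. the first-order Stein identity E[(f(x + nu*u) - f(x)) u / nu] = grad f_nu(x) for u ~ N(0, I), and the Hessian via the second-order Stein identity.

import numpy as np

def zo_gradient(f, x, nu=1e-3, m=100, rng=None):
    # Gaussian-smoothing gradient estimator (first-order Stein identity):
    # average of (f(x + nu*u) - f(x)) / nu * u over u ~ N(0, I).
    if rng is None:
        rng = np.random.default_rng()
    fx, g = f(x), np.zeros_like(x, dtype=float)
    for _ in range(m):
        u = rng.standard_normal(x.size)
        g += (f(x + nu * u) - fx) / nu * u
    return g / m

def zo_hessian(f, x, nu=1e-2, m=1000, rng=None):
    # Hessian estimator from the second-order Stein identity, with a
    # symmetric difference for variance reduction:
    # E[(f(x+nu*u) + f(x-nu*u) - 2 f(x)) / (2 nu^2) * (u u^T - I)].
    if rng is None:
        rng = np.random.default_rng()
    d, fx = x.size, f(x)
    H, I = np.zeros((d, d)), np.eye(d)
    for _ in range(m):
        u = rng.standard_normal(d)
        c = (f(x + nu * u) + f(x - nu * u) - 2.0 * fx) / (2.0 * nu ** 2)
        H += c * (np.outer(u, u) - I)
    return H / m

# Sanity check on f(x) = 0.5 * x @ A @ x, where grad = A @ x and hess = A:
A = np.diag([1.0, 2.0, 3.0])
f = lambda x: 0.5 * x @ A @ x
x = np.ones(3)
print(zo_gradient(f, x, m=5000))   # approximately [1, 2, 3]

Note that both estimators use only function evaluations, m+1 for the gradient and 2m+1 for the Hessian, rather than coordinate-wise finite differencing.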
Similar resources
Stochastic Zeroth-order Optimization in High Dimensions
We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries. Under sparsity assumptions on the gradients or function values, we present two algorithms: a successive component/feature selection algorithm and a noisy mirror descent algorithm using Lasso gradient estimates, and show that both algorithms have convergence rates that depend only loga...
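One plausible reading of the "Lasso gradient estimates" mentioned above, sketched here with illustrative constants (delta, alpha, and the sample count are not from the paper): finite differences along random directions z_i satisfy y_i ≈ z_i · grad f(x), so a sparse gradient can be recovered by l1-penalized regression of y on the directions.

import numpy as np
from sklearn.linear_model import Lasso

def lasso_gradient(f, x, n_samples=100, delta=1e-3, alpha=0.01, rng=None):
    # Directional finite differences: y_i ~= z_i . grad f(x) for small delta.
    if rng is None:
        rng = np.random.default_rng()
    Z = rng.standard_normal((n_samples, x.size))
    fx = f(x)
    y = np.array([(f(x + delta * z) - fx) / delta for z in Z])
    # If grad f(x) is sparse, l1-penalized least squares recovers it from
    # far fewer samples than the ambient dimension.
    return Lasso(alpha=alpha, fit_intercept=False).fit(Z, y).coef_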
Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming
In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method pos...
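The ingredient that distinguishes RSG from plain SGD is small enough to show directly: the iterates are ordinary stochastic gradient steps, but the returned point is drawn at random from them, which is what yields guarantees on the expected gradient norm at the output for nonconvex objectives. A minimal sketch, assuming a constant step size and a uniform return distribution (the method allows more general choices):

import numpy as np

def rsg(stoch_grad, x0, step=0.01, n_iters=1000, rng=None):
    # stoch_grad: an unbiased stochastic gradient oracle for f.
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(n_iters):
        x = x - step * stoch_grad(x)   # ordinary SGD step
        iterates.append(x.copy())
    # Return a randomly chosen iterate, not the last one.
    return iterates[rng.integers(n_iters)]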
Zeroth Order Nonconvex Multi-Agent Optimization over Networks
In this paper we consider distributed optimization problems over a multi-agent network, where each agent can only partially evaluate the objective function and is allowed to exchange messages with its immediate neighbors. Unlike existing works on distributed optimization, our focus is on optimizing a class of difficult non-convex problems, and under the challenging setti...
On Zeroth-Order Stochastic Convex Optimization via Random Walks
We propose a method for zeroth order stochastic convex optimization that attains the suboptimality rate of Õ(n^7 T^(-1/2)) after T queries for a convex bounded function f : R^n → R. The method is based on a random walk (the Ball Walk) on the epigraph of the function. The randomized approach circumvents the problem of gradient estimation, and appears to be less sensitive to noisy function evaluations c...
Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization
Stochastic optimization naturally arises in machine learning. Efficient algorithms with provable guarantees, however, are still largely missing, when the objective function is nonconvex and the data points are dependent. This paper studies this fundamental challenge through a streaming PCA problem for stationary time series data. Specifically, our goal is to estimate the principal component of ti...
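For context, the classical streaming PCA recursion underlying this line of work is Oja's update, sketched below; the handling of temporal dependence (e.g. thinning the stream so that the samples used are nearly independent) is the refinement such analyses add, and is omitted here.

import numpy as np

def oja_streaming_pca(stream, dim, step=1e-3, n_steps=100_000, rng=None):
    # Oja's update: w <- normalize(w + step * x (x . w)), one sample at a time.
    if rng is None:
        rng = np.random.default_rng()
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)
    for _ in range(n_steps):
        x = next(stream)            # next data point from the time series
        w += step * x * (x @ w)     # rank-one update x x^T w
        w /= np.linalg.norm(w)      # project back to the unit sphere
    return w                        # estimate of the top principal component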
Journal
Journal title: Foundations of Computational Mathematics
Year: 2021
ISSN: 1615-3383, 1615-3375
DOI: https://doi.org/10.1007/s10208-021-09499-8